
    Rapid Change in Articulatory Lip Movement Induced by Preceding Auditory Feedback during Production of Bilabial Plosives

    BACKGROUND: There is ample evidence of kinesthetically induced rapid compensation for unanticipated perturbation of speech articulatory movements. However, the role of auditory information in stabilizing articulation has been little studied, except for the control of voice fundamental frequency, voice amplitude and vowel formant frequencies. Although the influence of auditory information on the articulatory control process is evident in unintended speech errors caused by delayed auditory feedback, the direct and immediate effect of auditory alteration on the movements of articulators has not been clarified. METHODOLOGY/PRINCIPAL FINDINGS: This work examined whether temporal changes in the auditory feedback of bilabial plosives immediately affect the subsequent lip movement. We conducted experiments with an auditory feedback alteration system that enabled us to replace or block speech sounds in real time. Participants were asked to produce the syllable /pa/ repeatedly at a constant rate. During the repetition, normal auditory feedback was interrupted, and one of three pre-recorded syllables /pa/, /Φa/, or /pi/, spoken by the same participant, was presented once at a timing different from the anticipated production onset, while no feedback was presented for subsequent repetitions. Comparisons of the labial distance trajectories under altered and normal feedback conditions indicated that the movement quickened during the short period immediately after the alteration onset when /pa/ was presented 50 ms before the expected timing. Such a change was not significant under the other feedback conditions tested. CONCLUSIONS/SIGNIFICANCE: The earlier articulation rapidly induced by the temporally advanced auditory input suggests that a compensatory mechanism helps to maintain a constant speech rate by detecting errors between the internally predicted and the actually received auditory information associated with self-movement. The timing- and context-dependent effects of feedback alteration suggest that this sensory error detection operates within a temporally asymmetric window in which acoustic features of the syllable to be produced may be coded.

    The Effect of Macular Hole Duration on Surgical Outcomes: An Individual Participant Data Study of Randomized Controlled Trials

    Topic: To define the effect of symptom duration on outcomes in people undergoing surgery for idiopathic full-thickness macular holes (iFTMHs) by means of an individual participant data (IPD) study of randomized controlled trials (RCTs). The outcomes assessed were primary iFTMH closure and postoperative best-corrected visual acuity (BCVA). Clinical Relevance: Idiopathic full-thickness macular holes are visually disabling, with a prevalence of up to 0.5%. Untreated BCVA is typically reduced to 20/200. Surgery can close holes and improve vision. Symptom duration is thought to affect surgical outcomes, but the effect is unclear. Methods: A systematic review identified eligible RCTs that included adults with iFTMH undergoing vitrectomy with gas tamponade in which symptom duration, primary iFTMH closure, and postoperative BCVA were recorded. Bibliographic databases were searched for articles published between 2000 and 2020. Individual participant data were requested from eligible studies. Results: Twenty eligible RCTs were identified. Data were requested from all studies and obtained from 12, representing 940 eyes in total. Median symptom duration was 6 months (interquartile range, 3–10). Primary closure was achieved in 81.5% of eyes. There was a linear relationship between predicted probability of closure and symptom duration. Multilevel logistic regression showed that each additional month of symptom duration multiplied the odds of closure by 0.965, i.e., roughly 3.5% lower odds per month (95% confidence interval [CI], 0.935–0.996; P = 0.026). Internal limiting membrane (ILM) peeling, ILM flap use, better preoperative BCVA, face-down positioning, and smaller iFTMH size were associated with increased odds of primary closure. Median postoperative BCVA in eyes achieving primary closure was 0.48 logarithm of the minimum angle of resolution (logMAR) (20/60). Multilevel logistic regression showed that, for eyes achieving primary iFTMH closure, each additional month of symptom duration was associated with worsening BCVA by 0.008 logMAR units (95% CI, 0.005–0.011; P < 0.001), i.e., approximately 1 Early Treatment Diabetic Retinopathy Study letter lost per 2 months. ILM flaps, intraocular tamponade using long-acting gas, better preoperative BCVA, smaller iFTMH size, and phakic status were also associated with improved postoperative BCVA. Conclusions: Symptom duration was independently associated with both anatomic and visual outcomes in persons undergoing surgery for iFTMH. Time to surgery should be minimized and care pathways designed to enable this.
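
    The reported effect sizes compose multiplicatively (odds ratio per month) and additively (logMAR per month). A minimal arithmetic sketch of how they translate into closure probability and approximate ETDRS letters, assuming the cohort-average closure rate of 81.5% as an illustrative reference point (the function names and the 0.02-logMAR-per-letter conversion are mine, not the study's model):

        # Illustrative only: compose the per-month odds ratio (0.965) and logMAR slope
        # (0.008) from the abstract over a hypothetical additional delay. The 81.5%
        # reference is the cohort-average closure rate, not a patient-level prediction.

        def closure_probability(extra_months, reference_prob=0.815, or_per_month=0.965):
            """Closure probability after `extra_months` of delay beyond the cohort average."""
            odds = (reference_prob / (1 - reference_prob)) * or_per_month ** extra_months
            return odds / (1 + odds)

        def letters_lost(extra_months, logmar_per_month=0.008):
            """Approximate ETDRS letters lost, assuming 0.02 logMAR per letter."""
            return extra_months * logmar_per_month / 0.02

        for m in (3, 6, 12):
            print(f"+{m:2d} months: closure ~{closure_probability(m):.1%}, "
                  f"~{letters_lost(m):.1f} letters worse")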

    Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback

    Background: Motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognizing the sensory consequences of self-produced motor actions. In the auditory system, this effect has been suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production, in comparison with passive listening to playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch-shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results: The suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of the PSS increased to 200 cents, and was almost completely eliminated in response to 400-cent stimuli. Conclusions: These findings suggest that the brain uses motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50-, 100- and 200-cent stimuli and its elimination for 400-cent pitch-shifted voice feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self- versus non-self-vocalizations. This characteristic may enable the audio-vocal system to detect and correct unexpected errors in the feedback of self-produced voice pitch more effectively than for externally generated sounds.
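
    The contrast that drives these results is how much smaller the N1 is during active vocalization than during passive playback of the same feedback, as a function of pitch-shift magnitude. A minimal sketch of that comparison, with placeholder amplitudes standing in for the measured ERPs (none of the numbers below are the study's data):

        # Speaking-induced suppression per pitch-shift magnitude:
        # suppression = |N1 during passive playback| - |N1 during active vocalization|.
        # All amplitude values are hypothetical placeholders (microvolts).

        n1_listen = {0: -4.0, 50: -4.0, 100: -4.1, 200: -4.2, 400: -4.3}
        n1_speak  = {0: -2.0, 50: -2.7, 100: -3.2, 200: -3.9, 400: -4.3}

        for pss in sorted(n1_listen):
            suppression = abs(n1_listen[pss]) - abs(n1_speak[pss])
            print(f"PSS {pss:3d} cents: suppression = {suppression:.1f} uV")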

    Functional MRI of Auditory Responses in the Zebra Finch Forebrain Reveals a Hierarchical Organisation Based on Signal Strength but Not Selectivity

    BACKGROUND: Male songbirds learn their songs from an adult tutor when they are young. A network of brain nuclei known as the 'song system' is the likely neural substrate for sensorimotor learning and production of song, but the neural networks involved in processing the auditory feedback signals necessary for song learning and maintenance remain unknown. Determining which regions show preferential responsiveness to the bird's own song (BOS) is of great importance because neurons sensitive to self-generated vocalisations could mediate this auditory feedback process. Neurons in the song nuclei and in a secondary auditory area, the caudal medial mesopallium (CMM), show selective responses to the BOS. The aim of the present study was to investigate the emergence of BOS selectivity within the network of primary auditory sub-regions in the avian pallium. METHODS AND FINDINGS: Using blood oxygen level-dependent (BOLD) fMRI, we investigated neural responsiveness to natural and manipulated self-generated vocalisations and compared the selectivity for BOS and conspecific song in different sub-regions of the thalamo-recipient area Field L. Zebra finch males were exposed to conspecific song, BOS, and synthetic variations on BOS that differed in spectro-temporal and/or modulation phase structure. We found significant differences in the strength of BOLD responses between regions L2a, L2b and CMM, but no inter-stimulus differences within regions. In particular, the overall signal strength to song and synthetic variations thereof differed between two sub-regions of Field L2: zone L2a was activated significantly more strongly than the adjacent sub-region L2b. CONCLUSIONS: Based on our results, we suggest that, unlike nuclei in the song system, sub-regions in the primary auditory pallium do not show selectivity for the BOS but instead show different levels of activity upon exposure to any sound, according to their place in the auditory processing stream.

    The Sensory Consequences of Speaking: Parametric Neural Cancellation during Speech in Auditory Cortex

    When we speak, we provide ourselves with auditory speech input. Efficient monitoring of speech is often hypothesized to depend on matching the predicted sensory consequences derived from internal motor commands (forward model) with the actual sensory feedback. In this paper we tested the forward model hypothesis using functional magnetic resonance imaging. We administered an overt picture-naming task in which we parametrically reduced the quality of verbal feedback by noise masking. Presentation of the same auditory input in the absence of overt speech served as a listening control condition. Our results suggest that a match between predicted and actual sensory feedback results in inhibition or cancellation of auditory activity, because speaking with normal, unmasked feedback reduced activity in the auditory cortex compared to the listening control conditions. Moreover, during self-generated speech, activation in auditory cortex increased as the feedback quality of the self-generated speech decreased. We conclude that during speaking early auditory cortex is involved in matching external signals with an internally generated model or prediction of sensory consequences, the locus of which may reside in auditory or higher-order brain areas. Matching at early auditory cortex may provide a very sensitive monitoring mechanism that highlights speech production errors at very early levels of processing and may efficiently determine the self-agency of speech input.

    Weak Responses to Auditory Feedback Perturbation during Articulation in Persons Who Stutter: Evidence for Abnormal Auditory-Motor Transformation

    Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to investigate directly whether the speech motor system of PWS uses auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of the auditory feedback of the first formant frequency during production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (~150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p < 0.05). Measurements of auditory acuity indicated that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
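
    Responses of this kind are typically quantified by comparing the produced first-formant (F1) track on perturbed trials against the speaker's unperturbed baseline: the deviation opposing the imposed shift gives the compensation magnitude, and the first sustained divergence gives the latency. A rough sketch under those assumptions (the frame rate, threshold, and function name are illustrative, not the study's pipeline):

        import numpy as np

        # Sketch of how an F1 compensation response is commonly quantified: deviation of
        # the perturbed-trial F1 track from the unperturbed baseline, in the direction
        # opposing the shift, plus the latency of the first supra-threshold deviation.
        # FRAME_RATE_HZ and THRESHOLD_HZ are assumptions, not the study's values.

        FRAME_RATE_HZ = 333          # one F1 estimate every ~3 ms (assumed)
        THRESHOLD_HZ = 5.0           # minimum deviation counted as a response (assumed)

        def compensation(f1_perturbed, f1_baseline, shift_hz):
            """Return (peak compensation in Hz, latency in ms) for averaged F1 tracks."""
            sign = 1.0 if shift_hz > 0 else -1.0
            deviation = sign * (f1_baseline - f1_perturbed)   # >0 means opposing the shift
            above = np.flatnonzero(deviation > THRESHOLD_HZ)
            latency_ms = above[0] / FRAME_RATE_HZ * 1000 if above.size else None
            return float(deviation.max()), latency_ms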

    Human Auditory Cortical Activation during Self-Vocalization

    During speaking, auditory feedback is used to adjust vocalizations. The brain systems mediating this integrative ability have been investigated using a wide range of experimental strategies. In this report we examined how vocalization alters speech-sound processing within auditory cortex by directly recording evoked responses to vocalizations and to playback stimuli using intracranial electrodes implanted in neurosurgical patients. Several new findings resulted from these high-resolution invasive recordings in human subjects. Suppressive effects of vocalization were found to occur only within circumscribed areas of auditory cortex. In addition, at a smaller number of sites, the opposite pattern was seen: cortical responses were enhanced during vocalization. This increase in activity was reflected in high-gamma power changes but was not evident in the averaged evoked-potential waveforms. These new findings support forward models of vocal control in which efference copies of premotor cortex activity modulate sub-regions of auditory cortex.

    The Brain Atlas Concordance Problem: Quantitative Comparison of Anatomical Parcellations

    Many neuroscientific reports reference discrete macro-anatomical regions of the brain that were delineated according to a brain atlas or parcellation protocol. Currently, however, no widely accepted standards exist for partitioning the cortex and subcortical structures, or for assigning labels to the resulting regions, and many different procedures are in active use. Previous attempts to reconcile neuroanatomical nomenclatures have been largely qualitative, focusing on the development of thesauri or simple semantic mappings between terms. Here we take a fundamentally different approach, discounting the names of regions and instead comparing their definitions as spatial entities, in an effort to provide more precise quantitative mappings between anatomical entities as defined by different atlases. We develop an analytical framework for studying this brain atlas concordance problem and apply these methods in a comparison of eight diverse labeling methods used by the neuroimaging community. These analyses yield conditional probabilities that enable mapping between regions across atlases, which also form the input to graph-based methods for extracting higher-order relationships between sets of regions and to procedures for assessing the global similarity between different parcellations of the same brain. At a global scale, the results demonstrate a considerable lack of concordance between available parcellation schemes, falling within chance levels for some atlas pairs. At a finer level, the study reveals spatial relationships between sets of defined regions that are not obviously apparent; these are of high potential interest to researchers faced with the challenge of comparing results based on different anatomical models, particularly when coordinate-based data are not available. The complexity of the spatial overlap patterns revealed points to problems for attempts to reconcile anatomical parcellations and nomenclatures using strictly qualitative and/or categorical methods. Detailed results from this study are made available via an interactive web site at http://obart.info.
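
    The quantitative core of the framework is spatial overlap between regions defined by different atlases, expressed as conditional probabilities of the form P(label B in atlas 2 | voxel carries label A in atlas 1). A minimal voxel-counting sketch of that idea, assuming both parcellations are integer label volumes on the same grid (the function name, toy arrays, and the background-equals-zero convention are illustrative, not the paper's implementation):

        import numpy as np
        from collections import defaultdict

        # For every region A in atlas 1, estimate P(region B in atlas 2 | voxel in A)
        # from voxel counts over two co-registered label volumes (0 = background).

        def conditional_overlap(labels1, labels2):
            """Return {region_A: {region_B: P(B | A)}} from two label volumes."""
            counts = defaultdict(lambda: defaultdict(int))
            mask = (labels1 > 0) & (labels2 > 0)
            for a, b in zip(labels1[mask], labels2[mask]):
                counts[int(a)][int(b)] += 1
            return {a: {b: n / sum(row.values()) for b, n in row.items()}
                    for a, row in counts.items()}

        # Toy usage with random labels standing in for real atlas volumes:
        rng = np.random.default_rng(0)
        atlas1 = rng.integers(0, 4, size=(16, 16, 16))
        atlas2 = rng.integers(0, 6, size=(16, 16, 16))
        print(conditional_overlap(atlas1, atlas2)[1])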